This paper presents Z-Code++, a new pre-trained language model optimized for abstractive text summarization. The model extends the state-of-the-art encoder-decoder model with three techniques. First, we use a two-phase pre-training process to improve the model's performance on low-resource summarization tasks: the model is first pre-trained on text corpora for language understanding, and then continually pre-trained on summarization corpora for grounded text generation. Second, we replace the self-attention layers in the encoder with disentangled attention layers, where each word is represented by two vectors that encode its content and position, respectively. Third, we use fusion-in-encoder, a simple yet effective method for encoding long sequences in a hierarchical manner. Z-Code++ creates a new state of the art on 9 of 13 text summarization tasks across 5 languages. Our model is parameter-efficient: it outperforms the 600x larger PaLM-540B on XSum and the fine-tuned 200x larger GPT3-175B on SAMSum. In zero-shot and few-shot settings, our model substantially outperforms the competing models.
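To make the disentangled-attention idea concrete, here is a minimal NumPy sketch, assuming a simplified formulation in which every token carries separate content and position vectors and the attention score sums a content-to-content term with two content/position cross terms. The function and weight names are illustrative; the paper's actual layer uses relative-position embeddings shared across positions, which this toy version omits.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disentangled_attention(content, position, Wc_q, Wc_k, Wp_q, Wp_k, Wv):
    """Each token is described by two vectors: `content` (what the token is) and
    `position` (where it is). Scores sum a content-to-content term and two cross
    terms mixing content with position, before a standard softmax."""
    Qc, Kc = content @ Wc_q, content @ Wc_k
    Qp, Kp = position @ Wp_q, position @ Wp_k
    d = content.shape[-1]
    scores = (Qc @ Kc.T + Qc @ Kp.T + Qp @ Kc.T) / np.sqrt(3 * d)
    return softmax(scores) @ (content @ Wv)

# toy usage
L, d = 6, 8
rng = np.random.default_rng(0)
content = rng.normal(size=(L, d))
position = rng.normal(size=(L, d))          # stand-in for position embeddings
W = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
out = disentangled_attention(content, position, *W)
print(out.shape)  # (6, 8)
```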
Computer-aided diagnosis (CAD) systems can provide a reference for the clinical diagnosis of skin diseases. Convolutional neural networks (CNNs) can extract not only visual elements such as color and shape but also semantic features, and they have therefore achieved significant improvements in many tasks on dermoscopy images. Dermoscopic imaging has no principal orientation, which means the datasets contain a large number of rotated skin-lesion targets. However, CNNs lack anti-rotation ability, which inevitably affects their feature-extraction capability. We propose a rotation meanout (RM) network to extract rotation-invariant features from dermoscopy images. In RM, each group of rotated feature maps corresponds to a set of weight-sharing convolution outputs, which are fused with a meanout operation to obtain the final feature maps. Through theoretical derivation, the proposed RM network is rotation-equivariant and can extract rotation-invariant features after the global average pooling (GAP) operation. The extracted rotation-invariant features better represent the original data in classification and retrieval tasks on dermoscopy images. The proposed RM is a general operation: it neither changes the network structure nor adds any parameters, and it can be flexibly embedded in any part of a CNN. Extensive experiments were conducted on a dermoscopy image dataset. The results show that our method outperforms other anti-rotation methods and achieves significant improvements in dermoscopy image classification and retrieval tasks, indicating the potential of rotation invariance in the dermoscopy image field.
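A minimal PyTorch sketch of the rotation-meanout operation as described above: the same convolution (shared weights) is applied to four rotated copies of the feature map, the outputs are rotated back into a common orientation, and their mean is taken. The RotationMeanout class name, kernel size, and channel counts are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class RotationMeanout(nn.Module):
    """Toy rotation-meanout block over 0/90/180/270-degree rotations."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        outs = []
        for k in range(4):
            rotated = torch.rot90(x, k, dims=(2, 3))      # rotate input by k*90 degrees
            y = self.conv(rotated)                        # shared-weight convolution
            outs.append(torch.rot90(y, -k, dims=(2, 3)))  # undo the rotation
        return torch.stack(outs, dim=0).mean(dim=0)       # meanout fusion

# toy usage
x = torch.randn(2, 3, 32, 32)
print(RotationMeanout(3, 16)(x).shape)  # torch.Size([2, 16, 32, 32])
```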
Medical image segmentation (MIS) is essential for supporting disease diagnosis and treatment effect assessment. Despite considerable advances in artificial intelligence (AI) for MIS, clinicians remain skeptical of its utility, maintaining low confidence in such black box systems, a problem exacerbated by poor generalization to out-of-distribution (OOD) data. To move towards effective clinical utilization, we propose a foundation model named EvidenceCap, which makes the black box transparent in a quantifiable way through uncertainty estimation. EvidenceCap not only makes AI visible in regions of uncertainty and on OOD data, but also enhances the reliability, robustness, and computational efficiency of MIS. Uncertainty is modeled explicitly through subjective logic theory to gather strong evidence from features. We show the effectiveness of EvidenceCap on three segmentation datasets and apply it in clinical settings. Our work sheds light on safe clinical applications and explainable AI, and can contribute towards trustworthiness in the medical domain.
Neural network language models (NNLMs) play an essential role in automatic speech recognition (ASR) systems, especially in adaptation tasks when only text data is available. In practice, an NNLM is typically trained on a combination of data sampled from multiple corpora, so the data sampling strategy is important to adaptation performance. Most existing works focus on designing static sampling strategies; however, each corpus may have a varying impact at different NNLM training stages. In this paper, we introduce a novel adaptive multi-corpora training algorithm that dynamically learns and adjusts the sampling probability of each corpus throughout the training process. The algorithm is robust to corpus sizes and domain relevance. Compared with static sampling strategy baselines, the proposed approach yields remarkable improvements, achieving up to 7% and 9% relative word error rate (WER) reductions on in-domain and out-of-domain adaptation tasks, respectively.
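The paper's exact algorithm is not reproduced here, so the following Python sketch only illustrates the general idea of dynamic corpus sampling: probabilities are re-estimated during training from how much each corpus recently improved an in-domain dev loss. The multiplicative update, corpus names, and learning-rate constant are assumptions for illustration.

```python
import random

def adjust_sampling_probs(probs, dev_losses, prev_dev_losses, lr=0.5):
    """Illustrative update: corpora whose recent training most reduced the dev
    loss get a larger sampling probability (generic scheme, not the paper's)."""
    gains = [max(prev - cur, 0.0) for prev, cur in zip(prev_dev_losses, dev_losses)]
    weights = [p * (1.0 + lr * g) for p, g in zip(probs, gains)]
    total = sum(weights)
    return [w / total for w in weights]

def sample_corpus(corpora, probs):
    """Draw the corpus to sample the next mini-batch from."""
    return random.choices(corpora, weights=probs, k=1)[0]

# toy usage: three corpora, probabilities re-estimated at each evaluation step
corpora = ["in_domain", "web_text", "news"]
probs = [1 / 3, 1 / 3, 1 / 3]
probs = adjust_sampling_probs(probs, dev_losses=[2.1, 2.4, 2.3],
                              prev_dev_losses=[2.3, 2.4, 2.35])
print(sample_corpus(corpora, probs), probs)
```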
Inverse text normalization (ITN) is used to convert the spoken-form output of an automatic speech recognition (ASR) system into written form. Traditional handcrafted ITN rules can be complex to write and maintain. Neural modeling approaches, meanwhile, require large-scale, high-quality spoken-written pair examples in the same or a similar domain as the ASR system (in-domain data). Both approaches require costly and complex annotation. In this paper, we propose a data augmentation technique that effectively generates rich spoken-written numeric pairs from out-of-domain text data with minimal human annotation. We empirically demonstrate that ITN models trained with our data augmentation technique consistently outperform ITN models trained only on in-domain data, by 14.44% in overall accuracy across all numeric surfaces such as cardinal, currency, and fraction.
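As a rough illustration of the kind of spoken-written numeric pairs such augmentation produces, the sketch below scans out-of-domain text for written-form numbers and verbalizes them. The regular expression, the tiny cardinal verbalizer, and the currency handling are simplified assumptions, not the paper's pipeline.

```python
import re

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def cardinal_to_words(n: int) -> str:
    """Verbalize 0-999 (enough for a toy example)."""
    if n < 20:
        return ONES[n]
    if n < 100:
        return TENS[n // 10] + ("" if n % 10 == 0 else " " + ONES[n % 10])
    rest = n % 100
    return ONES[n // 100] + " hundred" + ("" if rest == 0 else " " + cardinal_to_words(rest))

def make_spoken_written_pairs(text: str):
    """Scan out-of-domain text for written-form numbers and emit
    (spoken, written) training pairs for an ITN model."""
    pairs = []
    for match in re.finditer(r"\$?\d+", text):
        written = match.group()
        if written.startswith("$"):
            spoken = cardinal_to_words(int(written[1:])) + " dollars"
        else:
            spoken = cardinal_to_words(int(written))
        pairs.append((spoken, written))
    return pairs

print(make_spoken_written_pairs("The ticket costs $45 and seat 12 is free."))
# [('forty five dollars', '$45'), ('twelve', '12')]
```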
Although the accuracy of brain tumor segmentation has improved recently, the results still exhibit low confidence and robustness. Uncertainty estimation is an effective way to change this situation, as it provides a measure of confidence in the segmentation results. In this paper, we propose a trusted brain tumor segmentation network that can generate robust segmentation results and reliable uncertainty estimations without excessive computational burden or modification of the backbone network. In our method, uncertainty is modeled explicitly using subjective logic theory, which treats the backbone neural network's predictions as subjective opinions by parameterizing the class probabilities of the segmentation as a Dirichlet distribution. Meanwhile, the trusted segmentation framework learns a function that gathers reliable evidence from the features, leading to the final segmentation results. Overall, our unified trusted segmentation framework endows the model with reliability and robustness to out-of-distribution samples. To evaluate the effectiveness of our model in terms of robustness and reliability, qualitative and quantitative experiments were conducted on the BraTS 2019 dataset.
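A minimal NumPy sketch of the subjective-logic step, assuming the common evidential formulation: non-negative evidence derived from the network output parameterizes a Dirichlet distribution, yielding per-class belief masses plus a single uncertainty mass that sum to one. The function name and the softplus mapping are illustrative; the backbone network and training loss are not shown.

```python
import numpy as np

def dirichlet_opinion(logits):
    """Map per-class network outputs to a subjective-logic opinion:
    evidence -> Dirichlet parameters alpha = evidence + 1, then belief masses
    and an uncertainty mass u = K / S, where S is the Dirichlet strength."""
    evidence = np.log1p(np.exp(logits))        # softplus keeps evidence non-negative
    alpha = evidence + 1.0                     # Dirichlet concentration parameters
    strength = alpha.sum(-1, keepdims=True)    # Dirichlet strength S
    belief = evidence / strength               # per-class belief mass
    uncertainty = logits.shape[-1] / strength  # K / S, large when evidence is scarce
    prob = alpha / strength                    # expected class probabilities
    return belief, uncertainty, prob

# toy usage: one confident voxel and one ambiguous voxel (3 classes)
logits = np.array([[8.0, 0.1, -2.0],
                   [0.2, 0.1, 0.0]])
belief, u, prob = dirichlet_opinion(logits)
print(u.ravel())  # the ambiguous voxel gets a much larger uncertainty mass
```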
Most of today's AI systems focus on applying self-attention mechanisms and transformer architectures to large amounts of diverse data to achieve impressive performance gains. In this paper, we propose to augment the transformer architecture with an external attention mechanism that brings in external knowledge and context. By integrating external information into the prediction process, we hope to reduce the need for ever-larger models and increase the democratization of AI systems. We find that the proposed external attention mechanism can significantly improve the performance of existing AI systems, allowing practitioners to easily customize foundation AI models to many diverse downstream applications. In particular, we focus on the task of commonsense reasoning, demonstrating that the proposed external attention mechanism can augment existing transformer models and significantly improve their reasoning capabilities. The proposed system, Knowledge External Attention for Reasoning (KEAR), reaches human parity on the open CommonsenseQA research benchmark with an accuracy of 89.4%, compared with the human accuracy of 88.9%.
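A toy sketch of external attention at the input level, assuming the knowledge is injected by concatenating retrieved text (e.g., knowledge-graph triples, a dictionary gloss, a related training example) to the question so that ordinary self-attention layers also attend over it. The field names, separator token, and example knowledge are assumptions, and retrieval itself is stubbed out.

```python
def build_external_attention_input(question: str, choice: str, knowledge: dict) -> str:
    """Concatenate retrieved external knowledge to a question-answer pair so a
    standard transformer attends over both the input and the external text."""
    parts = [
        question,
        choice,
        " ".join(knowledge.get("graph_triples", [])),  # e.g. knowledge-graph relations
        knowledge.get("dictionary_gloss", ""),         # e.g. a dictionary definition
        knowledge.get("retrieved_example", ""),        # e.g. a similar training question
    ]
    return " [SEP] ".join(p for p in parts if p)

example = build_external_attention_input(
    "Where would you find a seat that moves up and down?",
    "ferris wheel",
    {"graph_triples": ["ferris wheel AtLocation amusement park"],
     "dictionary_gloss": "ferris wheel: a large upright rotating wheel with passenger cars."},
)
print(example)
```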
Automated visual understanding of our diverse and open world demands computer vision models that generalize well with minimal customization for specific tasks, similar to human vision. Computer vision foundation models, which are trained on diverse, large-scale datasets and can be adapted to a wide range of downstream tasks, are critical for this mission to solve real-world computer vision applications. While existing vision foundation models such as CLIP, ALIGN, and Wu Dao 2.0 focus mainly on mapping image and textual representations to a cross-modal shared representation, we introduce a new computer vision foundation model, Florence, which expands the representations from coarse (scene) to fine (object), from static (images) to dynamic (videos), and from RGB to multiple modalities (caption, depth). By incorporating universal visual-language representations from Web-scale image-text data, our Florence model can be easily adapted to various computer vision tasks, such as classification, retrieval, object detection, VQA, image captioning, video retrieval, and action recognition. Moreover, Florence demonstrates outstanding performance in many types of transfer learning: fully sampled fine-tuning, linear probing, few-shot transfer, and zero-shot transfer for novel images and objects. All of these properties are critical for our vision foundation model to serve general-purpose vision tasks. Florence achieves new state-of-the-art results on the majority of 44 representative benchmarks, e.g., ImageNet-1K zero-shot classification with top-1 accuracy of 83.74 and top-5 accuracy of 97.18, 62.4 mAP on COCO fine-tuning, 80.36 on VQA, and 87.8 on Kinetics-600.
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest original dataset in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diverse relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
The interview has been regarded as one of the most crucial steps in recruitment. To fully prepare for interviews with recruiters, job seekers usually practice with mock interviews between each other. However, such a mock interview with peers is generally far from the real interview experience: the mock interviewers are not guaranteed to be professional and are unlikely to behave like a real interviewer. Due to the rapid growth of online recruitment in recent years, recruiters tend to conduct online interviews, which makes it possible to collect real interview data from real interviewers. In this paper, we propose a novel application named EZInterviewer, which aims to learn from online interview data and provide mock interview services to job seekers. The task is challenging in two ways: (1) interview data are now available but still low-resource; (2) generating meaningful and relevant interview dialogs requires a thorough understanding of both resumes and job descriptions. To address the low-resource challenge, EZInterviewer is trained on a very small set of interview dialogs. The key idea is to reduce the number of parameters that rely on interview dialogs by disentangling the knowledge selector from the dialog generator, so that most parameters can be trained with ungrounded dialogs as well as resume data, which are not low-resource. Evaluation results on a real-world job interview dialog dataset indicate that we achieve promising results in generating mock interviews. With the help of EZInterviewer, we hope to make mock interview practice easier for job seekers.